
    Near real-time stereo vision system

    The apparatus for a near real-time stereo vision system for use with a robotic vehicle is described. The system comprises two cameras mounted on three-axis rotation platforms, image-processing boards, a CPU, and specialized stereo vision algorithms. Bandpass-filtered image pyramids are computed, stereo matching is performed by least-squares correlation, and confidence ranges are estimated by means of Bayes' theorem. In particular, Laplacian image pyramids are built and disparity maps are produced from the 60 x 64 level of the pyramids at rates of up to 2 seconds per image pair. The first autonomous cross-country robotic traverses (of up to 100 meters) have been achieved using the stereo vision system of the present invention, with all computing done onboard the vehicle. The overall approach disclosed herein provides a unifying paradigm for practical domain-independent stereo ranging.
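    The processing chain described above (Laplacian pyramid construction followed by correlation-based matching) can be sketched in miniature. This toy version is an illustration of the idea, not the patented implementation: the function names are invented, a 2x2 box filter stands in for a proper Gaussian kernel, and the matching is a brute-force sum-of-squared-differences search.

    ```python
    import numpy as np

    def laplacian_pyramid(img, levels=3):
        """Build a bandpass (Laplacian) pyramid. A 2x2 box filter stands in
        for the Gaussian smoothing a real implementation would use."""
        pyr, cur = [], img.astype(float)
        for _ in range(levels):
            h, w = cur.shape[0] // 2 * 2, cur.shape[1] // 2 * 2
            small = cur[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
            up = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)
            pyr.append(cur[:h, :w] - up)   # bandpass residual at this level
            cur = small
        return pyr

    def ssd_disparity(left, right, max_disp=4, win=3):
        """Brute-force sum-of-squared-differences matching, the simplest
        least-squares-style correlation over a small disparity range."""
        h, w = left.shape
        pad = win // 2
        disp = np.zeros((h, w), dtype=int)
        for y in range(pad, h - pad):
            for x in range(pad + max_disp, w - pad):
                patch = left[y - pad:y + pad + 1, x - pad:x + pad + 1]
                costs = [np.sum((patch - right[y - pad:y + pad + 1,
                                               x - d - pad:x - d + pad + 1]) ** 2)
                         for d in range(max_disp + 1)]
                disp[y, x] = int(np.argmin(costs))
        return disp
    ```

    Matching on a coarse pyramid level rather than at full resolution is what makes the 2-seconds-per-pair rate plausible on the era's hardware: the search space shrinks quadratically with each level.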

    Multi-Sensor Mud Detection

    Robust mud detection is a critical perception requirement for Unmanned Ground Vehicle (UGV) autonomous offroad navigation. A military UGV stuck in a mud body during a mission may have to be sacrificed or rescued, both of which are unattractive options. There are several characteristics of mud that may be detectable with appropriate UGV-mounted sensors. For example, mud only occurs on the ground surface, is cooler than surrounding dry soil during the daytime under nominal weather conditions, is generally darker than surrounding dry soil in visible imagery, and is highly polarized. However, none of these cues are definitive on their own. Dry soil also occurs on the ground surface; shadows, snow, ice, and water can also be cooler than surrounding dry soil; shadows are also darker than surrounding dry soil in visible imagery; and cars, water, and some vegetation are also highly polarized. Shadows, snow, ice, water, cars, and vegetation can all be disambiguated from mud by using a suite of sensors that span multiple bands in the electromagnetic spectrum. Because there are military operations when it is imperative for UGVs to operate without emitting strong, detectable electromagnetic signals, passive sensors are desirable. JPL has developed a daytime mud detection capability using multiple passive imaging sensors. Cues for mud from multiple passive imaging sensors are fused into a single mud detection image using a rule base, and the resultant mud detection is localized in a terrain map using range data generated from a stereo pair of color cameras.
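    The actual JPL rule base is not given in the abstract; purely as an illustration, the simplest possible rule base for the four cues listed above is a conjunction, which captures the point that no single cue is definitive:

    ```python
    import numpy as np

    def fuse_mud_cues(cues):
        """Conjunctive rule base (illustrative only): label a pixel mud only
        if every available binary cue image fires at that pixel."""
        fused = np.ones_like(next(iter(cues.values())), dtype=bool)
        for mask in cues.values():
            fused &= mask
        return fused
    ```

    A real rule base would likely weight cues and handle missing sensors, but the conjunctive form already suppresses the false positives each cue generates alone (shadows, snow, cars, and so on).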

    Water Detection Based on Sky Reflections

    This software has been designed to detect water bodies that are out in the open on cross-country terrain at mid- to far-range (approximately 20 to 100 meters), using imagery acquired from a stereo pair of color cameras mounted on a terrestrial, unmanned ground vehicle (UGV). Non-traversable water bodies, such as large puddles, ponds, and lakes, are indirectly detected by detecting reflections of the sky below the horizon in color imagery. The appearance of water bodies in color imagery largely depends on the ratio of light reflected off the water surface to the light coming out of the water body. When a water body is far away, the angle of incidence is large, and the light reflected off the water surface dominates. We have exploited this behavior to detect water bodies out in the open at mid- to far-range. When a water body is detected at far range, a UGV's path planner can begin to look for alternate routes to the goal position sooner, rather than later. As a result, detecting water hazards at far range generally reduces the time required to reach a goal position during autonomous navigation. This software implements a new water detector based on sky reflections that geometrically locates the exact pixel in the sky that is reflecting on a candidate water pixel on the ground, and predicts if the ground pixel is water based on color similarity and local terrain features.
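    The geometric step above, locating the sky pixel whose reflection lands on a candidate ground pixel, reduces (to first order, for a level camera over flat water) to a mirror about the horizon row. The following sketch uses hypothetical function names and a plain color-distance test; the actual detector also uses local terrain features:

    ```python
    import numpy as np

    def reflected_sky_pixel(ground_row, ground_col, horizon_row):
        """For a level camera over flat water, the sky point reflected at a
        ground pixel lies (to first order) mirrored about the horizon row."""
        return 2 * horizon_row - ground_row, ground_col

    def is_water_candidate(image, ground_rc, horizon_row, color_tol=0.1):
        """Flag a below-horizon pixel as candidate water when its color is
        close to that of the sky pixel it would be reflecting."""
        sky_rc = reflected_sky_pixel(ground_rc[0], ground_rc[1], horizon_row)
        if sky_rc[0] < 0:
            return False  # mirrored point falls above the image frame
        return float(np.linalg.norm(image[ground_rc] - image[sky_rc])) < color_tol
    ```

    The mirror approximation is what makes the far-range case tractable: at large incidence angles the surface reflection dominates, so a water pixel's color should closely track its mirrored sky pixel.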

    Using Thermal Radiation in Detection of Negative Obstacles

    A method of automated detection of negative obstacles (potholes, ditches, and the like) ahead of ground vehicles at night involves processing of imagery from thermal-infrared cameras aimed at the terrain ahead of the vehicles. The method is being developed as part of an overall obstacle-avoidance scheme for autonomous and semi-autonomous offroad robotic vehicles. The method could also be applied to help human drivers of cars and trucks avoid negative obstacles, a development that may entail only modest additional cost inasmuch as some commercially available passenger cars are already equipped with infrared cameras as aids for nighttime operation.

    Self-Supervised Learning of Terrain Traversability from Proprioceptive Sensors

    Robust and reliable autonomous navigation in unstructured, off-road terrain is a critical element in making unmanned ground vehicles a reality. Existing approaches tend to rely on evaluating the traversability of terrain based on fixed parameters obtained via testing in specific environments. This results in a system that handles well the terrain it was trained in, but is unable to process terrain outside its test parameters. An adaptive system does not take the place of training, but supplements it. Whereas training imprints certain environments, an adaptive system would imprint terrain elements and the interactions among them, and allow the vehicle to build a map of local elements using proprioceptive sensors. Such sensors can include velocity, wheel slippage, bumper hits, and accelerometers. Data obtained by the sensors can be compared to observations from ranging sensors such as cameras and LADAR (laser detection and ranging) in order to adapt to any kind of terrain. In this way, it could sample its surroundings not only to create a map of clear space, but also of what kind of space it is and its composition. By having a set of building blocks consisting of terrain features, a vehicle can adapt to terrain that it has never seen before, and thus be robust to a changing environment. New observations could be added to its library, enabling it to infer terrain types that it wasn't trained on. This would be very useful in alien environments, where many of the physical features are known, but some are not. For example, a seemingly flat, hard plain could actually be soft sand, and the vehicle would sense the sand and avoid it automatically.
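    The "library of observations" idea can be illustrated with a deliberately minimal sketch: match one proprioceptive measurement (wheel slip) against known terrain classes, and imprint a new class when nothing matches. The class names, slip values, and matching rule here are all invented for illustration:

    ```python
    def classify_terrain(library, measured_slip, tol=0.1):
        """Match measured wheel slip against known terrain classes; when
        nothing matches, imprint a new class in the library so unseen
        terrain is recognized the next time it is encountered."""
        for name, expected_slip in library.items():
            if abs(measured_slip - expected_slip) <= tol:
                return name
        name = f"unknown_{len(library)}"
        library[name] = measured_slip  # add the new observation to the library
        return name
    ```

    A real system would fuse many proprioceptive channels and associate each class with the exteroceptive (camera/LADAR) appearance observed at the same spot, which is what lets it avoid, say, soft sand that looks like hard plain.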

    Processing Images of Craters for Spacecraft Navigation

    A crater-detection algorithm has been conceived to enable automation of what, heretofore, have been manual processes for utilizing images of craters on a celestial body as landmarks for navigating a spacecraft flying near or landing on that body. The images are acquired by an electronic camera aboard the spacecraft, then digitized, then processed by the algorithm, which consists mainly of the following steps: 1. Edges in an image are detected and placed in a database. 2. Crater rim edges are selected from the edge database. 3. Edges that belong to the same crater are grouped together. 4. An ellipse is fitted to each group of crater edges. 5. Ellipses are refined directly in the image domain to reduce errors introduced in the detection of edges and fitting of ellipses. 6. The quality of each detected crater is evaluated. It is planned to utilize this algorithm as the basis of a computer program for automated, real-time, onboard processing of crater-image data. Experimental studies have led to the conclusion that this algorithm is capable of a detection rate >93 percent, a false-alarm rate <5 percent, a geometric error <0.5 pixel, and a position error <0.3 pixel.
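    The fitting step (step 4) can be illustrated with a simpler stand-in: an algebraic least-squares circle fit to a group of rim edge points. A crater viewed obliquely needs a full ellipse fit, but the circle case shows the same closed-form least-squares structure:

    ```python
    import numpy as np

    def fit_circle(points):
        """Algebraic (Kasa) least-squares circle fit: rewrite
        (x-cx)^2 + (y-cy)^2 = r^2 as the linear system
        x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)."""
        x, y = points[:, 0], points[:, 1]
        A = np.column_stack([x, y, np.ones_like(x)])
        b = x ** 2 + y ** 2
        sol = np.linalg.lstsq(A, b, rcond=None)[0]
        cx, cy = sol[0] / 2, sol[1] / 2
        r = np.sqrt(sol[2] + cx ** 2 + cy ** 2)
        return cx, cy, r
    ```

    Step 5 exists because fits like this inherit any bias in the detected edge positions; refining the conic directly against image intensities is what pushes the geometric error below half a pixel.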

    Vision-Aided Autonomous Landing and Ingress of Micro Aerial Vehicles

    Micro aerial vehicles have limited sensor suites and computational power. For reconnaissance tasks and to conserve energy, these systems need the ability to autonomously land at vantage points or enter buildings (ingress). But for autonomous navigation, information is needed to identify and guide the vehicle to the target. Vision algorithms can provide egomotion estimation and target detection using input from cameras that are easy to include in miniature systems.

    Vehicle Detection for RCTA/ANS (Autonomous Navigation System)

    Using a stereo camera pair, imagery is acquired and processed through the JPLV stereo processing pipeline. From this stereo data, large 3D blobs are found. These blobs are then described and classified by their shape to determine which are vehicles and which are not. Prior vehicle detection algorithms are either targeted to specific domains, such as following lead cars, or are intensity-based methods that involve learning typical vehicle appearances from a large corpus of training data. In order to detect vehicles, the JPL Vehicle Detection (JVD) algorithm goes through the following steps: 1. Take as input a left disparity image and left rectified image from JPLV stereo. 2. Project the disparity data onto a two-dimensional Cartesian map. 3. Perform some post-processing of the map built in the previous step in order to clean it up. 4. Take the processed map and find peaks. For each peak, grow it out into a map blob. These map blobs represent large, roughly vehicle-sized objects in the scene. 5. Take these map blobs and reject those that do not meet certain criteria. Build descriptors for the ones that remain. Pass these descriptors onto a classifier, which determines if the blob is a vehicle or not. The probability of detection is the probability that if a vehicle is present in the image, is visible, and unoccluded, then it will be detected by the JVD algorithm. In order to estimate this probability, eight sequences were ground-truthed from the RCTA (Robotics Collaborative Technology Alliances) program, totaling over 4,000 frames with 15 unique vehicles. Since these vehicles were observed at varying ranges, one is able to find the probability of detection as a function of range. At the time of this reporting, the JVD algorithm was tuned to perform best at cars seen from the front, rear, or either side, and perform poorly on vehicles seen from oblique angles.
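    Steps 2 and 4 of the pipeline above can be sketched with toy versions: bin ground-plane points into a 2D map, then grow a blob out from a peak cell by flood fill. The cell size, map extent, and threshold here are invented parameters, not JVD's:

    ```python
    import numpy as np

    def project_to_map(points_xy, cell=0.5, size=10.0):
        """Step 2 (sketch): bin (x, y) ground-plane points, e.g. from
        triangulated disparities, into a 2D occupancy-count map."""
        n = int(size / cell)
        grid = np.zeros((n, n), dtype=int)
        for x, y in points_xy:
            i, j = int(x / cell), int(y / cell)
            if 0 <= i < n and 0 <= j < n:
                grid[i, j] += 1
        return grid

    def grow_blob(grid, seed, thresh=3):
        """Step 4 (sketch): grow a map blob out from a peak cell via
        4-connected flood fill over cells above a count threshold."""
        stack, blob = [seed], set()
        while stack:
            i, j = stack.pop()
            if (i, j) in blob:
                continue
            if not (0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]):
                continue
            if grid[i, j] < thresh:
                continue
            blob.add((i, j))
            stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
        return blob
    ```

    The resulting blobs are what steps 5's size criteria and shape descriptors operate on before classification.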

    Terrain-Adaptive Navigation Architecture

    A navigation system for a Mars rover has been designed to deal with rough terrain and/or potential slip when evaluating and executing paths. The system can also be used for any off-road, autonomous vehicle. The system enables vehicles to autonomously navigate different terrain challenges including dry river channel systems, putative shorelines, and gullies emanating from canyon walls. Several of the technologies within this innovation increase the navigation system's capabilities compared to earlier rover navigation algorithms.